-
Free, publicly-accessible full text available March 31, 2026
-
Automated canopy stress classification for field crops has traditionally relied on single-perspective, two-dimensional (2D) photographs, usually obtained through top-view imaging with unmanned aerial vehicles (UAVs). However, this approach may fail to capture the full extent of plant stress symptoms, which can manifest throughout the canopy. Recent advancements in LiDAR technologies have enabled the acquisition of high-resolution 3D point cloud data for the entire canopy, offering new possibilities for more accurate plant stress identification and rating. This study explores the potential of leveraging 3D point cloud data for improved plant stress assessment. We utilized a dataset of RGB 3D point clouds of 700 soybean plants from a diversity panel exposed to iron deficiency chlorosis (IDC) stress. From this unique set of 700 canopies exhibiting varying levels of IDC, we extracted several representations, including (a) handcrafted IDC symptom-specific features, (b) canopy fingerprints, and (c) latent features. Subsequently, we trained several classification models to predict plant stress severity using these representations, exhaustively investigating combinations of stress representations and models for the 3D data. We also compared the performance of these classification models against similar models trained using only the associated top-view 2D RGB image for each plant. Among the feature-model combinations tested, the 3D canopy fingerprint features trained with a support vector machine yielded the best performance, achieving higher classification accuracy than the best-performing model based on 2D data built using convolutional neural networks. Our findings demonstrate the utility of color canopy fingerprinting and underscore the importance of considering 3D data to assess plant stress in agricultural applications.
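The best-performing pipeline above pairs pre-extracted canopy fingerprint features with a support vector machine. A minimal sketch of that kind of feature-based severity classifier, using scikit-learn with synthetic placeholder features and ratings (the feature dimensionality and 1-5 severity scale here are assumptions, not the paper's actual setup):

```python
# Hypothetical sketch: classifying stress severity from pre-extracted
# 3D canopy fingerprint features with an SVM. The feature matrix and
# severity labels below are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(700, 64))    # one 64-D fingerprint vector per canopy (placeholder)
y = rng.integers(1, 6, size=700)  # IDC severity ratings 1-5 (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Standardize features before the RBF kernel; SVMs are scale-sensitive.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

With real fingerprint features the same pipeline applies unchanged; only the feature-extraction step upstream differs.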
-
Abstract Insect pests significantly impact global agricultural productivity and crop quality. Effective integrated pest management strategies require the identification of insects, both beneficial and harmful. Automated identification of insects under real-world conditions presents several challenges, including intraspecies dissimilarity and interspecies similarity, life-cycle stages, camouflage, diverse imaging conditions, and variability in insect orientation. An end-to-end approach for training deep-learning models, InsectNet, is proposed to address these challenges. Our approach has the following key features: (i) it uses a large dataset of insect images collected through citizen science, along with label-free self-supervised learning, to train a global model; (ii) it fine-tunes this global model using smaller, expert-verified regional datasets to create a local insect identification model; (iii) it provides high prediction accuracy even for species with small sample sizes; (iv) it is designed to enhance model trustworthiness; and (v) it democratizes access through streamlined machine learning operations. This global-to-local model strategy offers a more scalable and economically viable solution for implementing advanced insect identification systems across diverse agricultural ecosystems. We report accurate identification (>96% accuracy) of numerous agriculturally and ecologically relevant insect species, including pollinators, parasitoids, predators, and harmful insects. InsectNet provides fine-grained insect species identification, works effectively against challenging backgrounds, and avoids making predictions when uncertain, increasing its utility and trustworthiness. The model and associated workflows are available through a web-based portal accessible from a computer or mobile device. We envision InsectNet complementing existing approaches as part of a growing suite of AI technologies for addressing agricultural challenges.
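The global-to-local strategy described above amounts to freezing a pretrained feature extractor and retraining only a task head on a small regional dataset. A toy PyTorch sketch of that pattern, with a stand-in backbone and synthetic data (nothing here reflects the actual InsectNet architecture or weights):

```python
# Hypothetical sketch of global-to-local fine-tuning: freeze a
# "pretrained" backbone and train only the classification head on a
# small regional dataset. The tiny backbone and data are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # stands in for the global model
head = nn.Linear(64, 10)                                 # 10 regional species (placeholder)

for p in backbone.parameters():  # freeze the global model's features
    p.requires_grad = False

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 128)              # synthetic "regional" image embeddings
y = torch.randint(0, 10, (32,))       # synthetic expert-verified labels

for _ in range(5):                    # brief fine-tuning loop
    opt.zero_grad()
    loss = loss_fn(head(backbone(x)), y)
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.3f}")
```

Freezing the backbone keeps the representation learned from the large citizen-science corpus intact while the small regional dataset only has to fit the final layer, which is why the approach can work even for species with few samples.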
-
Abstract Mungbean (Vigna radiata (L.) Wilczek) is an important pulse crop, increasingly used as a low-fat source of protein, fiber, carbohydrates, minerals, and bioactive compounds in human diets. Mungbean is a dicot plant with trifoliate leaves. Leaves are the primary component of many plant functions, including photosynthesis, light interception, and canopy structure. The objectives were to investigate leaf morphological attributes, use image analysis to extract leaf morphological traits from photos of the Iowa Mungbean Diversity (IMD) panel, create a regression model to predict leaflet area, and undertake association mapping. We collected over 5000 leaf images of the IMD panel, consisting of 484 accessions, over 2 years (2020 and 2021) with two replications per experiment. Leaf traits were extracted using image analysis, analyzed, and used for association mapping. Morphological diversity included leaflet type (oval or lobed), leaflet size (small, medium, large), lobe angle (shallow, deep), and vein coloration (green, purple). A regression model was developed to predict each ovate leaflet's area (adjusted R2 = 0.97; residual standard errors ≤ 1.10). The candidate genes Vradi01g07560, Vradi05g01240, Vradi02g05730, and Vradi03g00440 are associated with multiple traits (length, width, perimeter, and area) across the leaflets (left, terminal, and right). These are suitable candidate genes for further investigation of their role in leaf development, growth, and function. Future studies will be needed to correlate the observed traits with yield or other important agronomic traits for use as phenotypic or genotypic markers in marker-aided selection for mungbean crop improvement.
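The leaflet-area model above is a regression from simpler image-derived measurements to area. A minimal sketch of that idea with scikit-learn, on synthetic length/width data (the features, units, and the 0.75 shape coefficient are illustrative assumptions, not the paper's fitted model):

```python
# Hypothetical sketch: predicting ovate leaflet area from length and
# width measurements with linear regression, analogous in spirit to the
# abstract's area model (reported adjusted R^2 = 0.97). Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
length = rng.uniform(2.0, 8.0, size=200)           # leaflet length, cm (synthetic)
width = length * rng.uniform(0.4, 0.7, size=200)   # leaflet width, cm (synthetic)
# True area mimics an ellipse-like leaflet shape, plus measurement noise.
area = 0.75 * length * width + rng.normal(0.0, 0.2, size=200)

# Include the length*width interaction, since area scales with it.
X = np.column_stack([length, width, length * width])
model = LinearRegression().fit(X, area)
print(f"R^2: {model.score(X, area):.3f}")
```

The interaction term does most of the work here; on real leaflet data, held-out validation (rather than the in-sample R^2 printed above) would be needed to match the paper's adjusted-R2 style of evaluation.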